Banks need to do more to ensure responsible AI use
The hype around artificial intelligence (AI) has skyrocketed since the launch of ChatGPT, OpenAI's chatbot. In just two months, ChatGPT was estimated to have reached 100 million monthly active users, with use cases ranging from writing essays to debugging code and composing music. This leap in capability and adoption prompted leading lights in the technology industry to call for a 'pause' in the development of powerful AI systems. On March 22, the non-profit organisation Future of Life Institute published an open letter urging AI labs to halt the creation of systems that can match human intelligence. More than 50,000 signatories -- including Elon Musk, CEO of SpaceX, Tesla and Twitter; Apple co-founder Steve Wozniak; and Ripple co-founder Chris Larsen -- have backed the call to suspend the training of models more powerful than GPT-4, the newest version of OpenAI's language model system.
Most Health Organizations Can't Ensure Responsible AI Use - InformationWeek
Despite growing interest in artificial intelligence, most healthcare organizations still lack the tools needed to ensure responsible use of such technologies, finds a report from Accenture Health. According to the report, Digital Health Technology Vision 2018, 81% of healthcare executives said they are not yet prepared to face the societal and liability issues that come with explaining their AI systems' decisions. Additionally, while 86% of respondents said their organizations use data to drive automated decision-making, the same proportion (86%) reported that they have not invested in the capabilities needed to verify data sources across their most critical systems.

Kaveh Safavi, head of Accenture's health practice, observed that this lack of investment in data verification exposes healthcare organizations to inaccurate, manipulated and biased data that can lead to corrupted insights and skewed results. "The 86% figure is critical," he stated, "given that 24% of executives also said that they have been the target of adversarial AI behaviors, such as falsified location data or bot fraud, on more than one occasion." On a positive note, the study found that 73% of respondents plan to develop internal ethical standards for AI to ensure that their systems act responsibly.